22 research outputs found

    Automated assembly inspection using a multiscale algorithm trained on synthetic images

    Includes bibliographical references. An important part of a robust automated assembly process is an accurate and efficient method for the inspection of finished assemblies. This paper presents a novel multiscale assembly inspection algorithm that detects errors in an assembled product. The algorithm is trained on synthetic images generated using the CAD model of the different components of the assembly. The CAD model guides the inspection algorithm through its training stage by addressing the different types of variation that occur during manufacturing and assembly. These variations are classified into those that can affect the functionality of the assembled product and those that are unrelated to its functionality. Using synthetic images in the training process adds to the versatility of the technique by removing the need to manufacture multiple prototypes and to control the lighting conditions. Once trained on synthetic images, the algorithm can detect assembly errors by examining real images of the assembled product. The effectiveness of the system is illustrated on a typical mechanical assembly. This work was supported by National Science Foundation grant number CDR 8803017 to the Engineering Research Center for Intelligent Manufacturing Systems, National Science Foundation grant number MIP93-00560, an AT&T Bell Laboratories PhD Scholarship, and the NEC Corporation.
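
The train-on-synthetic, inspect-on-real idea in the abstract can be sketched minimally: synthetic renders of a correct assembly are averaged into a reference template, and a candidate image passes inspection if its normalized correlation with the template is high enough. All names, the toy "renders", and the 0.9 threshold below are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def train_template(synthetic_images):
    """Average a stack of synthetic renders into one reference template."""
    return np.mean(np.stack(synthetic_images), axis=0)

def normalized_correlation(a, b):
    """Zero-mean normalized cross-correlation between two images."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def inspect(image, template, threshold=0.9):
    """Pass inspection if the image matches the trained template closely."""
    return normalized_correlation(image, template) >= threshold

rng = np.random.default_rng(0)
base = rng.random((32, 32))                              # stand-in for a CAD render
synthetic = [base + 0.01 * rng.standard_normal((32, 32)) for _ in range(5)]
template = train_template(synthetic)

good = base + 0.01 * rng.standard_normal((32, 32))       # correct assembly image
bad = rng.random((32, 32))                               # gross assembly error
```

A real system would of course replace the correlation test with the paper's multiscale matcher; the sketch only shows where synthetic training data enters the pipeline.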

    Automated visual assembly inspection

    Includes bibliographical references (pages 699-700). This chapter has discussed an intelligent assembly inspection system that uses a multiscale algorithm to detect errors in assemblies after the algorithm is trained on synthetic CAD images of correctly assembled products. It was shown how the CAD information of an assembly, along with fast rendering techniques on specialized graphics machines, can be used to automate work-cell camera and light placement. The current emphasis in the manufacturing industry on concurrent engineering will only cause this integration between the CAD model of a product and its manufacturing inspection to grow in value.

    Camera and light placement for automated assembly inspection

    Includes bibliographical references. Visual assembly inspection can provide a low-cost, accurate, and efficient solution to the automated assembly inspection problem, which is a crucial component of any automated assembly manufacturing process. The performance of such an inspection system is heavily dependent on the placement of the camera and light source. This article presents new algorithms that use the CAD model of a finished assembly for placing the camera and light source to optimize the performance of an automated assembly inspection algorithm. This general-purpose algorithm utilizes the component material properties and the contact information from the CAD model of the assembly, along with standard computer graphics hardware and physically accurate lighting models, to determine the effects of camera and light source placement on the performance of an inspection algorithm. The effectiveness of the algorithms is illustrated on a typical mechanical assembly. This work was supported by National Science Foundation grant number CDR 8803017 to the Engineering Research Center for Intelligent Manufacturing Systems, National Science Foundation grant number MIP93-00560, an AT&T Bell Laboratories PhD Scholarship, and the NEC Corporation.
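
The placement search described above can be sketched as a simple enumerate-render-score loop: candidate camera and light positions are rendered and scored with a proxy for inspection performance, and the best-scoring placement is kept. The toy `render` function and the contrast-based score below are invented stand-ins for the paper's physically accurate renderer and inspection-driven objective.

```python
import numpy as np

def render(camera_angle, light_angle, rng):
    """Toy stand-in for a CAD-based renderer: contrast degrades as the
    light moves away from an assumed ideal 45-degree angle."""
    contrast = np.cos(np.radians(light_angle - 45.0)) ** 2
    return contrast * rng.random((16, 16))

def placement_score(img):
    """Proxy objective: image standard deviation as a contrast measure."""
    return float(img.std())

def best_placement(camera_angles, light_angles, seed=0):
    """Exhaustively score every (camera, light) pair and keep the best."""
    rng = np.random.default_rng(seed)
    scored = [((c, l), placement_score(render(c, l, rng)))
              for c in camera_angles for l in light_angles]
    return max(scored, key=lambda t: t[1])[0]

cam, light = best_placement([0, 30, 60], [0, 45, 90])
```

In practice the candidate set would be sampled over a view sphere and the score computed from the actual inspection algorithm's error rate, but the control flow is the same.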

    The development and validation of a scoring tool to predict the operative duration of elective laparoscopic cholecystectomy

    Background: The ability to accurately predict operative duration has the potential to optimise theatre efficiency and utilisation, thus reducing costs and increasing staff and patient satisfaction. With laparoscopic cholecystectomy being one of the most commonly performed procedures worldwide, a tool to predict operative duration could be extremely beneficial to healthcare organisations. Methods: Data collected from the CholeS study on patients undergoing cholecystectomy in UK and Irish hospitals between 04/2014 and 05/2014 were used to study operative duration. A multivariable binary logistic regression model was produced in order to identify significant independent predictors of long (> 90 min) operations. The resulting model was converted to a risk score, which was subsequently validated on a second cohort of patients using ROC curves. Results: After exclusions, data were available for 7227 patients in the derivation (CholeS) cohort. The median operative duration was 60 min (interquartile range 45-85), with 17.7% of operations lasting longer than 90 min. Ten factors were found to be significant independent predictors of operative durations > 90 min, including ASA, age, previous surgical admissions, BMI, gallbladder wall thickness and CBD diameter. A risk score was then produced from these factors, and applied to a cohort of 2405 patients from a tertiary centre for external validation. This returned an area under the ROC curve of 0.708 (SE = 0.013), with the proportion of operations lasting > 90 min increasing more than eightfold, from 5.1% to 41.8%, between the extremes of the score. Conclusion: The scoring tool produced in this study was found to be significantly predictive of long operative durations on validation in an external cohort. As such, the tool may have the potential to enable organisations to better organise theatre lists and deliver greater efficiencies in care.
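
The scoring-tool recipe in this abstract has a standard shape: logistic-regression coefficients are converted into integer points, each patient's points are summed, and discrimination is checked with the area under the ROC curve. The sketch below illustrates that recipe; the factor names, weights, and toy patients are invented for illustration and are not from the CholeS data.

```python
def risk_points(coefs, scale=2.0):
    """Round logistic-regression coefficients into integer risk points."""
    return {k: int(round(v * scale)) for k, v in coefs.items()}

def score_patient(points, patient):
    """Sum the points for every risk factor the patient has."""
    return sum(points[k] for k, present in patient.items() if present)

def roc_auc(scores, labels):
    """Rank-based AUC: the probability that a random positive outscores
    a random negative (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

coefs = {"high_bmi": 0.9, "thick_wall": 1.4, "wide_cbd": 0.7}  # illustrative betas
points = risk_points(coefs)

patients = [
    {"high_bmi": 1, "thick_wall": 1, "wide_cbd": 0},  # long operation
    {"high_bmi": 0, "thick_wall": 0, "wide_cbd": 0},  # short operation
    {"high_bmi": 1, "thick_wall": 0, "wide_cbd": 1},  # long operation
    {"high_bmi": 0, "thick_wall": 0, "wide_cbd": 1},  # short operation
]
labels = [1, 0, 1, 0]
scores = [score_patient(points, p) for p in patients]
auc = roc_auc(scores, labels)
```

Integer points trade a little discrimination for bedside usability, which is why published tools usually report the AUC of the rounded score rather than of the raw model.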

    The IDENTIFY study: the investigation and detection of urological neoplasia in patients referred with suspected urinary tract cancer - a multicentre observational study

    Objective To evaluate the contemporary prevalence of urinary tract cancer (bladder cancer, upper tract urothelial cancer [UTUC] and renal cancer) in patients referred to secondary care with haematuria, adjusted for established patient risk markers and geographical variation. Patients and Methods This was an international multicentre prospective observational study. We included patients aged ≥16 years, referred to secondary care with suspected urinary tract cancer. Patients with a known or previous urological malignancy were excluded. We estimated the prevalence of bladder cancer, UTUC, renal cancer and prostate cancer, stratified by age, type of haematuria, sex, and smoking. We used a multivariable mixed-effects logistic regression to adjust cancer prevalence for age, type of haematuria, sex, smoking, hospitals, and countries. Results Of the 11 059 patients assessed for eligibility, 10 896 were included from 110 hospitals across 26 countries. The overall adjusted cancer prevalence (n = 2257) was 28.2% (95% confidence interval [CI] 22.3-34.1), bladder cancer (n = 1951) 24.7% (95% CI 19.1-30.2), UTUC (n = 128) 1.14% (95% CI 0.77-1.52), renal cancer (n = 107) 1.05% (95% CI 0.80-1.29), and prostate cancer (n = 124) 1.75% (95% CI 1.32-2.18). The odds ratios for patient risk markers in the model for all cancers were: age 1.04 (95% CI 1.03-1.05; P < 0.001), visible haematuria 3.47 (95% CI 2.90-4.15; P < 0.001), male sex 1.30 (95% CI 1.14-1.50; P < 0.001), and smoking 2.70 (95% CI 2.30-3.18; P < 0.001). Conclusions A better understanding of cancer prevalence across an international population is required to inform clinical guidelines. We are the first to report urinary tract cancer prevalence across an international population in patients referred to secondary care, adjusted for patient risk markers and geographical variation. Bladder cancer was the most prevalent disease. Visible haematuria was the strongest predictor for urinary tract cancer.
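
The adjusted odds ratios quoted above come from exponentiating logistic-regression coefficients, with a 95% CI obtained by exponentiating beta ± 1.96 × SE. A minimal sketch of that back-calculation follows; the beta and SE values below are chosen so the output roughly reproduces the visible-haematuria figure of 3.47 (2.90-4.15), and are not the study's actual fitted values.

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% CI from a logistic coefficient and its SE."""
    return (math.exp(beta),          # point estimate
            math.exp(beta - z * se), # lower 95% bound
            math.exp(beta + z * se)) # upper 95% bound

# Illustrative coefficient roughly matching the reported visible-haematuria OR.
or_est, ci_lo, ci_hi = odds_ratio_ci(beta=1.244, se=0.0915)
```

The asymmetry of the interval around the point estimate is expected: the CI is symmetric on the log-odds scale, not the odds-ratio scale.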

    A CAD-driven multiscale approach to automated inspection

    Includes bibliographical references (page V-400). In this paper we develop a general multiscale stochastic object detection algorithm for use in an automated inspection application. Information from a CAD model is used to initialize the object model and guide the training phase of the algorithm. An object is represented as a stochastic tree, where each node of the tree is associated with one of the various object components used to locate and identify the part. During the training phase a number of model parameters are estimated from a set of training images, some of which are generated from the CAD model. The algorithm then uses a fast multiscale search strategy to locate and identify the subassemblies making up the object tree. We demonstrate the performance of the algorithm on a typical mechanical assembly. This work was supported by an AT&T Bell Laboratories PhD Scholarship, the NEC Corporation, National Science Foundation grant number MIP93-00560, and National Science Foundation grant number CDR 8803017 to the Engineering Research Center for Intelligent Manufacturing Systems.
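
The stochastic-tree representation described above can be sketched as a tree of subassembly nodes searched root-first, so that a located parent gates the search for its children. The match test here is a trivial set-membership placeholder standing in for the paper's multiscale matcher; the gearbox example and all names are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Subassembly:
    """One node of the object tree: a named part with child subassemblies."""
    name: str
    children: list = field(default_factory=list)

def detect(node, found_parts, order=None):
    """Depth-first search of the object tree, recording each subassembly
    that is present; children are only examined under a found parent."""
    if order is None:
        order = []
    if node.name in found_parts:          # placeholder for a real matcher
        order.append(node.name)
        for child in node.children:
            detect(child, found_parts, order)
    return order

# Toy assembly tree: a gearbox whose housing carries two gears.
gearbox = Subassembly("gearbox", [
    Subassembly("housing", [Subassembly("gear_a"), Subassembly("gear_b")]),
    Subassembly("shaft"),
])

# gear_b is missing from the "image", so it is never reported, while the
# rest of the tree is still located.
present = {"gearbox", "housing", "gear_a", "shaft"}
located = detect(gearbox, present)
```

Gating children on a located parent is what makes the search fast: a missing or misplaced parent prunes its entire subtree from further matching.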

    A Multiscale Stochastic Image Model for Automated Inspection

    In this paper we develop a novel multiscale stochastic image model to describe the appearance of a complex three-dimensional object in a two-dimensional monochrome image. This formal image model is used in conjunction with Bayesian estimation techniques to perform automated inspection. The model is based on a stochastic tree structure in which each node is an important subassembly of the three-dimensional object. The data associated with each node or subassembly are modeled in a wavelet domain. We use a fast multiscale search technique to compute the sequential MAP (SMAP) estimate of the unknown position, scale factor, and 2-D rotation for each subassembly. The search is carried out in a manner similar to a sequential likelihood ratio test, where the process advances in scale rather than time. The results of this search determine whether or not the object passes inspection. A similar search is used in conjunction with the EM algorithm to estimate the model parameters for a given object from a set of training images. The performance of the algorithm is demonstrated on two different real assemblies.
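
The abstract models each subassembly's image data in a wavelet domain. As a generic illustration of what that means, the sketch below computes one level of the 2-D Haar transform, splitting an image into a coarse approximation plus horizontal, vertical, and diagonal detail bands. This is a standard Haar step, not the paper's specific feature set.

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar wavelet transform of an even-sized image.
    Returns (approximation, horizontal, vertical, diagonal) subbands,
    each half the size of the input in both dimensions."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0       # coarse-scale approximation
    lh = (a - b + c - d) / 4.0       # horizontal detail
    hl = (a + b - c - d) / 4.0       # vertical detail
    hh = (a - b - c + d) / 4.0       # diagonal detail
    return ll, lh, hl, hh

# A flat image has all of its energy in the approximation band.
flat = np.ones((8, 8))
ll, lh, hl, hh = haar2d(flat)
```

Recursing on the `ll` band yields the multiresolution pyramid that a coarse-to-fine SMAP search would walk, matching at low resolution first and refining at each finer scale.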

    A multiscale assembly inspection algorithm

    Includes bibliographical references. An important aspect of robust automated assembly is an accurate and efficient method for the inspection of finished assemblies. This novel algorithm is trained on synthetic images generated using the CAD model of the different components of the assembly. Once trained on synthetic images, the algorithm can detect assembly errors by examining real images of the assembled product. This work was supported by the NEC Corporation, National Science Foundation grant number CDR 8803017 to the Engineering Research Center for Intelligent Manufacturing Systems, National Science Foundation grant number MIP93-00560, and an AT&T Bell Laboratories Ph.D. Scholarship.

    IP EDICS 1.6 Image Processing: Multiresolution Processing

    In this paper we develop a novel multiscale stochastic image model to describe the appearance of a complex three-dimensional object in a two-dimensional monochrome image. This formal image model is used in conjunction with Bayesian estimation techniques to perform automated inspection. The model is based on a stochastic tree structure in which each node is an important subassembly of the three-dimensional object. The data associated with each node or subassembly are modeled in a wavelet domain. We use a fast multiscale search technique to compute the sequential MAP (SMAP) estimate of the unknown position, scale factor, and 2-D rotation for each subassembly. The search is carried out in a manner similar to a sequential likelihood ratio test, where the process advances in scale rather than time. The results of this search determine whether or not the object passes inspection. A similar search is used in conjunction with the EM algorithm to estimate the model parameters for a given object from a set of training images. The performance of the algorithm is demonstrated on two different real assemblies.